Second order ancillary: A differential view from continuity
Second order approximate ancillaries have evolved as the primary ingredient
for recent likelihood development in statistical inference. This uses quantile
functions rather than the equivalent distribution functions, and the intrinsic
ancillary contour is given explicitly as the plug-in estimate of the vector
quantile function. The derivation uses a Taylor expansion of the full quantile
function, and the linear term gives a tangent to the observed ancillary
contour. For the scalar parameter case, there is a vector field that integrates
to give the ancillary contours, but for the vector case, there are multiple
vector fields and the Frobenius conditions for mutual consistency may not hold.
We demonstrate, however, that the conditions hold in a restricted way and that
this verifies the second order ancillary contours in moderate deviations. The
methodology can generate an appropriate exact ancillary when such exists or an
approximate ancillary for the numerical or Monte Carlo calculation of
p-values and confidence quantiles. Examples are given, including nonlinear
regression and several enigmatic examples from the literature.
Comment: Published in the Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm) at http://dx.doi.org/10.3150/10-BEJ248.
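The plug-in quantile construction can be made concrete with a toy model that is not drawn from the paper itself: for an exponential model (our own choice, with our own variable names), the ancillary contour through an observed point is obtained by holding the plug-in probability level fixed while the parameter varies, and here it recovers an exact ancillary.

```python
import math

# Hypothetical illustration: exponential model with F(y; theta) = 1 - exp(-y/theta)
# and quantile function q(p; theta) = -theta * log(1 - p).

def cdf(y, theta):
    return 1.0 - math.exp(-y / theta)

def quantile(p, theta):
    return -theta * math.log(1.0 - p)

def ancillary_contour(y_obs, theta_hat):
    """Contour through y_obs: hold the plug-in probability level
    p_hat = F(y_obs; theta_hat) fixed and let theta vary."""
    p_hat = cdf(y_obs, theta_hat)
    return lambda theta: quantile(p_hat, theta)

y_obs, theta_hat = 2.0, 1.5
contour = ancillary_contour(y_obs, theta_hat)

# The contour passes through the observed point at theta = theta_hat ...
assert abs(contour(theta_hat) - y_obs) < 1e-12
# ... and q(p_hat; theta) = theta * (y_obs / theta_hat), so the ratio
# y / theta_hat is constant along the contour: an exact ancillary here.
for theta in (0.5, 1.0, 3.0):
    assert abs(contour(theta) / theta - y_obs / theta_hat) < 1e-12
```

In richer models no exact ancillary need exist, which is where the Taylor expansion and the Frobenius conditions discussed in the abstract come into play; this sketch only shows the scalar-parameter mechanics.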
Higher Accuracy for Bayesian and Frequentist Inference: Large Sample Theory for Small Sample Likelihood
Recent likelihood theory produces p-values that have remarkable accuracy
and wide applicability. The calculations use familiar tools such as maximum
likelihood values (MLEs), observed information and parameter rescaling. The
usual evaluation of such p-values is by simulations, and such simulations do
verify that the global distribution of the p-values is uniform(0, 1), to high
accuracy in repeated sampling. The derivation of the p-values, however,
asserts a stronger statement, that they have a uniform(0, 1) distribution
conditionally, given identified precision information provided by the data. We
take a simple regression example that involves exact precision information and
use large sample techniques to extract highly accurate information as to the
statistical position of the data point with respect to the parameter:
specifically, we examine various p-values and Bayesian posterior survivor
s-values for validity. With observed data we numerically evaluate the various
p-values and s-values, and we also record the related general formulas. We
then assess the numerical values for accuracy using Markov chain Monte Carlo
(McMC) methods. We also propose some third-order likelihood-based procedures
for obtaining means and variances of Bayesian posterior distributions, again
followed by McMC assessment. Finally we propose some adaptive McMC methods to
improve the simulation acceptance rates. All these methods are based on
asymptotic analysis that derives from the effect of additional data. And the
methods use simple calculations based on familiar maximizing values and related
informations. The example illustrates the general formulas and the ease of
calculations, while the McMC assessments demonstrate the numerical validity of
the p-values as percentage position of a data point. The example, however, is
very simple and transparent, and thus gives little indication that in a wide
generality of models the formulas do accurately separate information for almost
any parameter of interest, and then do give accurate p-value determinations
from that information. As illustration an enigmatic problem in the literature
is discussed and simulations are recorded; various examples in the literature
are cited.
Comment: Published in the Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/07-STS240.
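The repeated-sampling uniformity check mentioned in the abstract can be sketched generically. This is a toy normal-mean example of our own, not the paper's regression setting: under the null, an exact p-value is uniform(0, 1), which a small simulation makes visible.

```python
import math
import random

def z_pvalue(ybar, mu0, sigma, n):
    """One-sided p-value for H0: mu = mu0 in a normal model with known sigma."""
    z = math.sqrt(n) * (ybar - mu0) / sigma
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def simulate_pvalues(mu0=0.0, sigma=1.0, n=10, reps=20000, seed=1):
    """Draw repeated samples under the null and record the p-values."""
    rng = random.Random(seed)
    ps = []
    for _ in range(reps):
        ybar = sum(rng.gauss(mu0, sigma) for _ in range(n)) / n
        ps.append(z_pvalue(ybar, mu0, sigma, n))
    return ps

ps = simulate_pvalues()
# Under the null the p-values are exactly uniform(0, 1), so their
# empirical mean should sit very close to 0.5.
assert abs(sum(ps) / len(ps) - 0.5) < 0.01
```

The stronger claim in the abstract, conditional uniformity given the precision information in the data, needs the conditional assessments the paper develops; this global check is only the easy half.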
Rejoinder to "Is Bayes Posterior just Quick and Dirty Confidence?"
Rejoinder to "Is Bayes Posterior just Quick and Dirty Confidence?" by D. A.
S. Fraser [arXiv:1112.5582].
Comment: Published in the Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/11-STS352REJ.
MobGeoSen: facilitating personal geosensor data collection and visualization using mobile phones
Mobile sensing and mapping applications are becoming more prevalent because sensing hardware is becoming more portable and more affordable. However, most of the hardware uses small numbers of fixed sensors that report and share multiple sets of environmental data, which raises privacy concerns. Instead, these systems can be decentralized and managed by individuals in their public and private spaces. This paper describes a robust system called MobGeoSens which enables individuals to monitor their local environment (e.g. pollution and temperature) and their private spaces (e.g. activities and health) by using mobile phones in their day-to-day life.
Enabling Future Sustainability Transitions: An Urban Metabolism Approach to Los Angeles (Pincetl et al.)
Summary: This synthesis article presents an overview of an urban metabolism (UM) approach using mixed methods and multiple sources of data for Los Angeles, California. We examine electric energy use in buildings and greenhouse gas emissions from electricity, and calculate embedded infrastructure life cycle effects, water use and solid waste streams in an attempt to better understand the urban flows and sinks in the Los Angeles region (city and county). This quantification is being conducted to help policy-makers better target energy conservation and efficiency programs, pinpoint best locations for distributed solar generation, and support the development of policies for greater environmental sustainability. It provides a framework to which many more UM flows can be added to create greater understanding of the study area's resource dependencies. Going forward, together with policy analysis, UM can help untangle the complex intertwined resource dependencies that cities must address as they attempt to increase their environmental sustainability.
Inferential models: A framework for prior-free posterior probabilistic inference
Posterior probabilistic statistical inference without priors is an important
but so far elusive goal. Fisher's fiducial inference, Dempster-Shafer theory of
belief functions, and Bayesian inference with default priors are attempts to
achieve this goal but, to date, none has given a completely satisfactory
picture. This paper presents a new framework for probabilistic inference, based
on inferential models (IMs), which not only provides data-dependent
probabilistic measures of uncertainty about the unknown parameter, but does so
with an automatic long-run frequency calibration property. The key to this new
approach is the identification of an unobservable auxiliary variable associated
with observable data and unknown parameter, and the prediction of this
auxiliary variable with a random set before conditioning on data. Here we
present a three-step IM construction, and prove a frequency-calibration
property of the IM's belief function under mild conditions. A corresponding
optimality theory is developed, which helps to resolve the non-uniqueness
issue. Several examples are presented to illustrate this new approach.
Comment: 29 pages with 3 figures. Main text is the same as the published version. Appendix B is an addition, not in the published version, that contains some corrections and extensions of two of the main theorems.
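The three-step IM construction can be made concrete with the textbook normal-mean case. This is our own toy sketch under assumed notation, not the paper's general treatment: associate Y = theta + Z with Z = F^{-1}(U), U ~ uniform(0, 1); predict U with the default symmetric random set S = {u : |u - 1/2| <= |U - 1/2|}; then combine to get a plausibility for each singleton assertion {theta0}, which here reduces to a closed form.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def plausibility(theta0, y):
    """Plausibility of the assertion {theta0} for the toy model
    Y = theta + Z, Z ~ N(0, 1), using the default symmetric predictive
    random set S = {u : |u - 1/2| <= |U - 1/2|}, U ~ uniform(0, 1).
    Combining the three IM steps gives pl(theta0) = 1 - |2 F(y - theta0) - 1|."""
    u = norm_cdf(y - theta0)
    return 1.0 - abs(2.0 * u - 1.0)

y = 1.3  # observed data (arbitrary value for illustration)
# Plausibility peaks at theta0 = y and decays symmetrically around it ...
assert abs(plausibility(y, y) - 1.0) < 1e-12
assert abs(plausibility(y - 0.7, y) - plausibility(y + 0.7, y)) < 1e-12
# ... and the region {theta0 : pl(theta0) >= 0.10} matches the usual
# 90% z-interval y +/- 1.645, reflecting frequency calibration.
z90 = 1.6448536269514722
assert plausibility(y - z90, y) >= 0.10 - 1e-9
assert plausibility(y - z90 - 0.01, y) < 0.10
```

The abstract's optimality theory concerns how to choose the predictive random set; the symmetric default used above is just one calibrated choice.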